Session List

Welcome Address - Prof. Paula McDonald


Keynote Address - Prof. Michael Milford


Keynote Address - Sovereignty as Practice: Rethinking Research from Indigenous Standpoints


Keynote Address - Learn how to work well with your HDR supervisor


Keynote Address - Amanda Miotto


Speaking industry’s language: how ON helps turn research into real-world value


CSIRO’s ON Innovation Program will host an interactive panel and hands-on workshop showing researchers how to move from “grant-speak” to partnership-ready impact stories that resonate with industry, government and investors.

The session will unpack the core ingredients for research commercialisation success - clarity, customers, community and impact - and show how these translate into compelling value propositions for non-academic partners.

The panel will explore real-world examples of ON teams who have used customer discovery to identify industry needs, navigate institutional pathways and build collaborations that accelerate translation, from licensing and joint ventures to startups and policy influence.

In the workshop, participants will use practical ON tools (including an Impact & Commercialisation Pathway Canvas and 30/60/90-day impact roadmaps) to reframe their own projects around problems, users and value, then craft concrete next steps towards an industry conversation.

Attendees will leave with a draft impact narrative, clearer language for describing benefits to partners, and a simple action plan to progress at least one potential collaboration opportunity.

Writing better, 'long-surviving' code as bioscientists


Most research code works… until it doesn’t. In this talk, I’ll walk through shifting from writing one-off scripts to building a real, deployable tool (in the genomics context with Python), and the software engineering principles that matter for scaling and reusability. It is not a coding tutorial but a practical look at how small decisions in research code shape whether it survives reuse or quietly collapses under its own weight, illustrated with recommended practices and relevant supporting Python packages. It will also touch on publishing research packages for public use, with pip and conda.
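
By way of illustration (a sketch, not taken from the talk), one of the smallest 'long-surviving' habits is separating reusable logic from the script's entry point, so the same code can be imported, tested and eventually packaged:

```python
# A minimal sketch of one habit that helps research code survive: wrap
# the script's logic in a function with explicit inputs and outputs,
# and keep the command-line entry point separate.

def gc_content(sequence: str) -> float:
    """Return the GC fraction of a DNA sequence (hypothetical example)."""
    sequence = sequence.upper()
    if not sequence:
        raise ValueError("empty sequence")
    gc = sum(base in "GC" for base in sequence)
    return gc / len(sequence)

if __name__ == "__main__":
    # The guard keeps the demo run out of import-time side effects,
    # so the function can be reused (and tested) from other code.
    print(gc_content("GATTACA"))
```

The same separation is what makes a project straightforward to publish later with pip or conda.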
Human-AI dialogue for research idea generation


This interactive workshop explores how structured dialogue with AI can support research thinking, idea generation, and reflective learning.

Using examples from an experimental e-book project titled 'Idea Builder: A Billion-Dollar Brain', the session demonstrates how conversations between a human and AI can function as a cognitive workflow for developing research questions, clarifying concepts, and generating new ideas.

Participants will learn a simple Human-AI dialogue framework that transforms curiosity into researchable questions. Through guided exercises, attendees will experiment with AI-assisted thinking to explore their own research interests.

The workshop also introduces a broader philosophical perspective on learning beyond traditional classroom structures and discusses the emerging role of AI as a thinking partner in academic knowledge creation.

How to wrestle a billion words (of Hansard)


This workshop aims to show you how to make the most of the Proceedings of the Australian Federal Parliament (1901-present) in your research. We will start by showing you how Parliament works through its official transcripts and explore the official search interface. Then we will show you how to take things to the next level with computational tools, analysing (a) the passage of a specific piece of legislation and (b) how particular language has been used since Australia's Federation.
NCBI Genome Portal essentials: Finding, viewing and comparing genomes


This hands-on workshop introduces participants to the National Center for Biotechnology Information (NCBI) database and some of its tools, with a focus on genomes, exploring annotations, and using genome browsers and comparative tools. Participants will learn how to efficiently locate genomic resources and interpret genome structure and annotation directly within NCBI.
Word sucks for long-form writing! Thesis and manuscript writing using fit-for-purpose software


This workshop introduces software designed for long-form academic writing, highlighting its capacity to help researchers organise their thoughts, structure chapters, and support conceptual thinking. Programs like Scrivener and its freeware alternatives are project-based systems, with an entire manuscript residing in a single, hierarchically organised file. Unlike Word's document-centric approach, these programs integrate structure and content management, enabling researchers to store multiple drafts, compile references and notes, and experiment with restructuring sections and chapters with ease. This approach prioritises developmental thinking over the pressure to produce polished writing immediately. Participants will learn how these programs enable 'writing for thinking' rather than 'writing up' and reduce the chaos of multiple open documents during thesis or manuscript development.
Practical LLMs for research: From concepts to workflows


This interactive session introduces researchers to practical uses of large language models (LLMs) across the research lifecycle. It combines conceptual understanding with hands-on demonstrations of real workflows.

Participants will:

  • Understand how LLMs work at a high level and their limitations
  • Learn effective prompting strategies and reasoning techniques
  • Explore literature review workflows using LLMs
  • Use LLMs for coding, data analysis, and research automation
  • Discuss reliability, ethics, and responsible use in research

The session is designed to be accessible across disciplines, with examples from scientific and data-driven research.

Mastering Open EcoAcoustics: From passive monitoring to AI species recognition


Passive Acoustic Monitoring (PAM) is revolutionising how we study biodiversity, allowing us to listen to ecosystems at scales previously impossible. However, the sheer volume of audio data can be overwhelming. This hands-on workshop introduces Open Ecoacoustics, Australia's national infrastructure for managing and analysing environmental sound.

We will navigate the transition from deploying a sensor in the field to generating actionable conservation insights. Participants will receive a guided tour of Ecosounds and the A20 desktop tool, learning how to securely upload massive datasets and leverage cutting-edge AI. We’ll explore how AI foundation models such as Google's Perch and BirdNET allow researchers to build species recognisers that scan thousands of hours of audio in minutes. Finally, we’ll discuss how to translate these acoustic detections into robust reports and downstream spatial models to inform policy and habitat management.

Relationship competencies for industry‑engaged research


Successful industry‑engaged research depends on more than technical expertise — it requires strong relational capability. This interactive workshop explores the relationship competencies researchers need to build, sustain and leverage effective industry partnerships.

Participants will examine practical strategies for trust‑building, boundary spanning, stakeholder intelligence, negotiation and co‑creation. Through applied exercises and real‑world scenarios, attendees will learn how to navigate differing institutional cultures, align expectations, manage risk and design mutually beneficial collaborations.

The session is relevant to HDR candidates, early career researchers and established academics seeking to strengthen the impact and translational potential of their research through industry collaboration.

Emergent conceptual frameworks in research design


Participants will learn what an emergent conceptual framework is, identify suitable research contexts and methodological approaches, develop an emergent conceptual framework based on research data, and learn how to write the methodology and discussion chapters.
Basic models and big impacts


What's the difference between important and interesting? How can I prove the impact of my project afterward? What's the direct value of this research?

Whether you're trying to pitch a new project or focus an existing one, the humble spreadsheet can be a persuasive guide. It's also surprisingly easy to impress people with, since it's just a convenient wrapper around some high-school algebra.

Post-PhD, I've been working to help get research out of the lab and into the ocean. I'll discuss how technically simple spreadsheets have directed research investment, demonstrated impacts at different scales, and reminded us of what matters once research leaves our hands. I'll also discuss the limits of the approach.

We'll also build a what-if model to demonstrate the process in action, and show how the results can highlight the scope for new work ... and what will be difficult to justify.
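
The kind of what-if model described above can be sketched in a few lines of code as well as in a spreadsheet; all figures below are invented purely for illustration:

```python
# A hypothetical what-if model of the high-school-algebra kind the talk
# describes: vary one assumption and watch the headline number move.
# All figures are invented for illustration.

def cost_per_site(n_sites: int, fixed_cost: float, unit_cost: float) -> float:
    """Total monitoring cost spread across the number of sites."""
    return (fixed_cost + unit_cost * n_sites) / n_sites

# What happens to per-site cost as the deployment scales?
for n in (10, 50, 100):
    print(n, cost_per_site(n, fixed_cost=20_000, unit_cost=300))
```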

Graphical storytelling: Designing publication-ready figures for your research


This workshop introduces researchers to the principles of graphical storytelling and the design of clear, publication-ready figures. It focuses on how to plan figures with purpose, so that each image supports the narrative of a paper, thesis, or presentation rather than simply filling space. Topics include a practical checklist for figure design, choosing suitable plotting and design software, improving figure aesthetics, and using simple tools and resources to create cleaner and more effective visuals. Through examples and case studies, participants will learn how to move from raw content to figures that communicate methods, workflows, and results more clearly.
Data science for storytelling


The famous statistician John Tukey once said: 'The greatest value of a picture is when it forces us to notice what we never expected to see.' As researchers, identifying this 'unseen' in your data is becoming crucial to publishing your next journal paper. While publishing papers is essential, it’s just a part of research communication; to maximise the impact of your work, it’s vital that you make your research findings accessible and engaging to a wide audience. In this workshop, we will discuss the best methods to communicate the story that your data tells.

Data visualisation basics for publishing


Explore methods and tools to create publication-quality visualisations of structured, tabular data. Tools covered include Microsoft Excel and the open-source tool RAWGraphs.

On completion of this hands-on workshop, participants will be familiar with:

  • appropriate visualisation models for a dataset
  • visualisation tools and steps for tabular data
  • methods to save a visualisation to a publication quality file format.
Demystifying AI for research (V2): Introductory level class for AI beginners


An update to the original Demystifying AI for Research presented in 2023. This introductory-level presentation for AI beginners covers how AI is involved in research: what AI is and what AI isn't, the common ways researchers can interact with it, and, finally, what is likely to happen next given just how fast AI is developing. It acts as an introduction to the ideas of AI for research, ahead of the more practical follow-up beginner-level workshop.
From a mess to a map


This workshop will look at the problems you might encounter when turning messy text data into the highly structured data needed for many kinds of computational analysis. As an example, we will start with data collected in an online survey about Australian slang and end with maps showing the comparative distribution of some slang terms across the country. We will look at the parts of the workflow where computational approaches are useful (e.g. normalising typography and parsing complex responses), but will also emphasise the decisions which remain the responsibility of the researcher (e.g. categorising responses).
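
As an illustration (assumed, not taken from the workshop materials), the typography-normalising step might look something like this:

```python
# A small sketch of typographic normalisation for survey responses:
# unify case, whitespace and quote characters before comparing answers.

import re

def normalise(response: str) -> str:
    text = response.strip().lower()
    text = text.replace("\u2019", "'").replace("\u2018", "'")  # curly -> straight quotes
    text = re.sub(r"\s+", " ", text)  # collapse runs of whitespace
    return text

# Two typographically different answers now compare equal:
print(normalise("  Arvo\u2019s   fine ") == normalise("arvo's fine"))
```

Which responses then count as the *same* slang term is exactly the kind of decision that stays with the researcher.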
Copyright made practical


This interactive 1.5‑hour workshop helps researchers build confidence in navigating copyright across the research lifecycle. After a short overview of key copyright principles and why they matter, we work through real‑world scenarios - reusing figures, sharing datasets, including images in theses and other publications, and republishing your own work. Researchers will learn how to assess rights, identify risks, and make informed decisions. The session concludes with a live Q&A where participants can explore their own copyright questions.
Making the most of your qualitative transcripts with a little bit of computing


Qualitative research projects sometimes get away from us, and we find that we have more material than we know what to do with. This workshop aims to give you a pathway for navigating 'too much' with a little bit of computational assistance and *without* giving up on the core qualitative foundation of your work.

We'll work through a case study and some demonstration tools to help you:

  • Make the most of your transcripts by identifying consistency issues that will confuse a computer (and maybe your collaborators)
  • Work through marking up some transcripts in a more consistent format
  • Explore some of the ways computers can help you search and find things
  • Apply computational methods that might help you see the 'big picture', or what you might be missing, while still being able to close-read your transcripts
AI-assisted programming: No autopilot required


AI tools can accelerate coding, but blind trust creates fragile and insecure workflows. This workshop introduces a human-in-the-loop approach to AI-assisted programming, focusing on how to work directly with AI outputs while keeping full control of the code. It covers key principles, limitations, and guardrails for responsible use, followed by practical, example-driven use of AI for code generation, refactoring, explanation, and testing in real workflows. Designed for researchers who want AI assistance rather than an autopilot.
OmniRestore: Robust universal image restoration from combined and unspecified degradations


A novel restoration framework for removing combined and unspecified combinations of degradations from an image.
Accelerate your research with cloud computing on the ARDC Nectar Research Cloud


Don't let limited computing power slow down your research. The ARDC Nectar Research Cloud provides fast and scalable computing resources tailored specifically for research.

Whether you need to run intensive data analyses and complex simulations, train AI and ML models, manage big data or collaborate seamlessly across institutions, Nectar gives you the computational power to scale up your work.

Join us for an interactive introduction to Nectar, where we'll cover:

  • What is cloud computing and how can it accelerate my research?
  • Real-world case studies powered by Nectar.
  • Guidance on accessing tutorials and ongoing support.
  • Live Q&A to answer your questions.

No cloud computing or coding experience is required to attend.

Python behind-the-scenes: What data scientists should know


Data scientists rely on Python's numpy, scipy, pandas and matplotlib libraries for most of their analysis. Why is this? And do you know the Python code behind these packages?

In this workshop we'll dive into the Python code behind common data science tools, focusing on features often missed during data-oriented tutorials, to bridge the gap between Python for programmers and Python for data scientists.

We'll look at questions like:

  • 'When should I loop?'
  • 'Why can't I use Python's built-in data types?'
  • 'What really *is* a DataFrame?'
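
As a taste of the last question, a table can be modelled in plain Python as a dict of equal-length columns. This is only a rough sketch of the idea, not how pandas is actually implemented:

```python
# A rough sketch of the 'what is a DataFrame?' question, without pandas:
# at heart, a table can be modelled as a dict of equal-length columns,
# which is why column-wise operations are the natural fast path.

table = {
    "species": ["koala", "wombat", "koala"],
    "count":   [3, 1, 4],
}

# Column-wise work touches one list at a time...
total = sum(table["count"])

# ...while row-wise work has to zip the columns back together.
rows = list(zip(table["species"], table["count"]))

print(total, rows[0])
```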
Dyadic data analysis using R


This hands-on workshop introduces dyadic data analysis using R, focusing on data collected from interdependent pairs (e.g., partners, students–teachers, supervisors–employees). Participants will learn the logic of dyadic data, key assumptions, and how to apply appropriate statistical models in R through practical examples.

By the end of the workshop, participants will be able to identify dyadic data structures and conduct a basic dyadic data analysis using R.

Introduction to computational text analytics


Do you have more text than you know what to do with? Did you collect text data for your project and now feel overwhelmed when you try to analyse it? Is there too much? Are you doing the same thing over and over and feeling like you're not using your time efficiently? Are you worried about missing the forest for the trees (or the trees for the forest)? If any of these apply to you (or you're just interested in learning more), this workshop is for you.

This workshop will introduce the fundamentals of computational text analysis using LADAL. We'll start with the key questions of why and where computational methods might be appropriate for your work before demonstrating a few key computational methods that are relevant for many researchers.
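
As a taste of those fundamentals, the first step in most computational text analysis is a simple frequency count (an illustration only, not LADAL itself):

```python
# A minimal word-frequency count: tokenise crudely, then tally.

from collections import Counter
import re

text = "The cat sat on the mat. The mat was flat."
words = re.findall(r"[a-z]+", text.lower())
freq = Counter(words)

print(freq.most_common(2))  # the two most frequent words
```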

The research orchestrator: Building multi-tool agentic workflows for research


This interactive workshop offers a beginner-level explanation of agentic workflows for research. It covers the shift in approaches to using AI for research, from simple prompt-based linear chats to more flexible and capable agentic workflows. Rather than providing a complex multi-step prompt, researchers can build focused AI-powered tools to accomplish specific tasks while still maintaining control over the entire process. The workshop also demonstrates how these tools can be orchestrated by a supervising AI to allow for even more flexibility and power. It includes a demonstration and audience-based exercises designed to get researchers thinking about how these agents can be used in their own research. Finally, we discuss what agentic workflow tools might become and the current limitations of the agents found in foundation models.
Using the WildObs image platform


This workshop introduces the WildObs Image Platform and demonstrates how it can streamline camera trap data processing. Participants will learn the step-by-step workflow for uploading camera trap images, running automated species classification using AI models, and verifying results through the platform’s human-in-the-loop review system. We will also show how processed datasets can be downloaded and used for further ecological analysis and reporting.
What makes a “good” plot good? - Designing effective data visualisations


In an era of rapidly increasing data volume and accessibility, the ability to design effective data visualisations is a critical research skill. In this workshop we will explore what makes a “good” plot good through a focus on the philosophy, physiology, and narrative storytelling role of visualisation, rather than on specific tools or software. Drawing on principles of human perception, cognition, and scientific storytelling, the session discusses how design choices shape what readers actually see and understand. This workshop aims to provide participants with a transferable understanding of design principles for creating visualisations that distil complex information into clear, meaningful, and persuasive research narratives.
Harness the power of your ORCID


This practical session will walk you through the essentials of curating your ORCID profile to streamline future ARC and NHMRC grant applications and otherwise capitalise on your ORCID.
Thematic analysis using Obsidian


Thematic analysis has become a popular method in the humanities and other disciplines. However, the process of extracting codes and developing themes can be confusing, tedious and time-consuming. The markup and notation tool Obsidian makes thematic coding much faster and more useful. This workshop shows how to set up your Obsidian workspace for thematic analysis, including creating the relevant bases and templates. It then goes through the process of coding a short piece of text to illustrate a suggested way to record text codes and themes.
Deploying a self-hosted RAG AI Mentor on ARDC Nectar


Want to build an AI assistant that guides learning rather than just handing out answers? This 1-hour demo walks through deploying a secure, Retrieval-Augmented Generation (RAG) chatbot using Dify on the ARDC Nectar Research Cloud.

Using an International Data Spaces (IDSA) learning mentor as our practical use case, we will demystify the end-to-end pipeline:

  • Structuring raw documentation for optimal AI retrieval.
  • Provisioning open-source hosting and deploying Dify.
  • Leveraging Google IDE tools to streamline configuration.
  • Crafting 'state-machine' system prompts that prioritise guided learning.
  • Evaluating RAG outputs to prevent hallucinations.
  • Launching your application and API.

Finally, we will explore registering any future RAG training chatbot you develop on DReSA (dresa.org.au), Australia’s national registry for training events and materials, to ensure your tool reaches a nationwide audience.
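
The retrieval step at the heart of such a pipeline can be sketched in miniature. A real deployment with Dify would use embeddings and a vector store; here, plain word overlap stands in to show the idea:

```python
# A toy sketch of RAG retrieval: score each document chunk against the
# query and hand the best chunk to the model as context. Word overlap
# is a stand-in for real embedding similarity.

def score(query: str, chunk: str) -> int:
    return len(set(query.lower().split()) & set(chunk.lower().split()))

chunks = [
    "Data spaces enable sovereign data sharing between organisations.",
    "Nectar offers virtual machines for Australian researchers.",
]
query = "how do data spaces share data"

best = max(chunks, key=lambda c: score(query, c))
# The retrieved chunk is then pasted into the system prompt as context.
print(best)
```

Grounding answers in retrieved text like this is what the session means by evaluating RAG outputs to prevent hallucinations.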

Research smarter with GenAI and digital tools at eGrad School


Artificial Intelligence is rapidly reshaping research practice, yet many researchers remain unsure where to begin, how to use AI responsibly, or how to meaningfully integrate it into their existing workflows.

This presentation introduces practical, research-focused uses of generative AI tools that can support HDR students, early career researchers, and supervisors across disciplines. Participants will explore how GenAI can assist with literature review, research planning, writing support, coding, data analysis, and research communication, with a strong emphasis on ethical and transparent use.

The session will also showcase the eGrad School platform, which offers open-access digital modules designed to strengthen researchers’ AI literacy, digital capability, and professional skills. Together, these tools and resources aim to empower researchers to work smarter, more efficiently, and with integrity in an evolving digital research landscape.

Cinematic thinking in real-time visual storytelling


This session explores how cinematic thinking and real-time visualisation practices are applied in contemporary creative industries to communicate ideas, narratives, and complex information. Drawing from professional experience in virtual production and real-time cinematic workflows, the session provides insight into how visual storytelling is designed, constructed, and communicated in practice.

The session presents industry case studies and visual breakdowns of real-world projects to demonstrate how cinematic language is translated into real-time environments. Key areas of focus include framing, lighting, camera movement, composition, and narrative design within immersive and screen-based contexts.

Participants will also engage in guided discussion activities that explore how these approaches can inform research communication and interdisciplinary creative practice. The session is designed to be accessible to a broad audience and does not require prior technical experience.

Managing Python environments and installations: Bring your problems


Python can be surprisingly difficult to set up well, partly because there are so many options. We'll cover some of the differences, pros and cons, and have some time to troubleshoot issues that you might be encountering.

We'll cover questions like:

  • 'Should I use Anaconda?'
  • 'What is a virtual environment?'
  • 'Please help me: Python is not a recognised command, cmdlet etc.!!!'
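
For the second question, a virtual environment is essentially a private interpreter prefix, and Python itself can report whether you are inside one (a small sketch, not workshop material):

```python
# A venv is a private interpreter prefix with its own site-packages.
# Python can tell you whether the current interpreter is inside one.

import sys

def in_virtualenv() -> bool:
    # In a venv, sys.prefix points at the environment while
    # sys.base_prefix still points at the original installation.
    return sys.prefix != sys.base_prefix

print(in_virtualenv())
```

A venv is typically created with `python -m venv .venv` and then activated in each new shell.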
Data donations using data download packages


In this session, participants will learn about a new ethical framework for collecting data from social media platforms (Instagram, YouTube, ChatGPT) using data download packages (DDPs) for social science research. I will introduce:

  • What DDPs contain and what kinds of research questions they can help us answer.
  • How to make sure data is handled properly.
  • How to enrol respondents and manage their data.

By the end of the session, participants will be able to:

  • Choose the right platform for their research based on the research question.
  • Explore DDPs.
  • Create data donation projects and enrol participants.
AI tools and considerations for literature reviews


Participants will be introduced to the effective use of generative AI tools for conducting literature reviews, as well as navigating ethical considerations in their application.
Advanced Excel and spreadsheeting for research data


This workshop will demonstrate best practices in using Microsoft Excel for research, as well as advanced functions and capabilities.

The content covered by this workshop will include:

  • Best practices for laying out data in spreadsheets
  • Formatting cells and data
  • Conditional formatting
  • Data exports and file formats
  • Advanced functions including XLOOKUP, CONCATENATE, and more
  • Tables and pivot tables
  • Identifying and removing duplicate cells
  • Advanced searching and filtering
  • Quality-of-life features
Editing with Gen AI: Dos and don'ts


This session explores the use of Gen AI for editing and proofreading, the issues and concerns it raises, and how to ensure you maintain your authorial voice.
How do we work together? Neurodivergence in the research student-supervisor relationship


In this session, participants will learn about neurodivergence and research. Topics will include disability standards in education, reasonable adjustments, and the relational approach to supervision – including how to work well with their supervisors. This will be supported through scenario-based content and practical activities. Participants will leave the session with a completed checklist and reflection detailing how they work best, which they can then apply in conversations with their research supervisors.
AI as your coding assistant: Running NCBI BLAST on Linux without being a programmer


Ever wanted to run bioinformatics tools from the command line but didn't know where to start? This hands-on workshop is designed for researchers with no programming background who want to harness the power of AI to make genomics analysis faster and less intimidating.

We will work through a complete beginner workflow in four practical steps. First, participants will be introduced to the Linux command line through Windows Subsystem for Linux 2 (WSL2) — no prior Linux experience required. Second, we will install NCBI BLAST+ locally, covering the essential steps of setting up a real bioinformatics tool in a Linux environment. Third, and most importantly, participants will learn how to use a freely available large language model (such as ChatGPT) to generate, explain, and troubleshoot Bash scripts for running BLAST searches — turning natural language questions into working code without writing a single line yourself. Finally, time permitting, we will demonstrate how an AI agent can go one step further: autonomously executing BLAST queries, parsing outputs, and returning results on your behalf.

By the end of this session, participants will be comfortable navigating a Linux terminal, will have a working BLAST installation, and will leave with a practical mental model for using AI as a coding assistant in their own research — applicable well beyond BLAST to any command-line bioinformatics tool.

All you need is a laptop with WSL2 installed. A setup guide will be circulated to registered participants before the session.
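
For a flavour of the commands involved, the kind of call an assistant ends up generating can be assembled safely in Python. This is a sketch using standard BLAST+ options (`-query`, `-db`, `-out`, `-outfmt`); the file and database names are hypothetical:

```python
# Assemble a blastn command line without shell-quoting mistakes.
# File names here are hypothetical; the flags are standard BLAST+ options.

import shlex

def blastn_command(query: str, db: str, out: str) -> str:
    args = ["blastn", "-query", query, "-db", db,
            "-out", out, "-outfmt", "6"]  # outfmt 6 = tabular output
    return shlex.join(args)

print(blastn_command("reads.fasta", "nt", "hits.tsv"))
```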

Data-driven endurance sports research: A workshop in R


We will go deep into the analysis of GPS data for athlete performance modelling. We will work with trail running data, but the analysis methods are transferable to a wide array of disciplines.
Accelerating systematic reviews with automation tools


Background: The use of automation tools to assist in systematic review production is becoming more common. Most tools assist with (1) identifying and removing duplicate records and (2) screening and selecting relevant studies. The Evidence Review Accelerator (TERA) supports all tasks in review production: it assists with designing the review question, searching for and deduplicating studies, selecting studies, data extraction, conducting multiple types of meta-analysis, and writing the methods and results.

TERA is purpose-built by the two-week systematic review team (2weekSR) to enable full reviews to be completed in vastly reduced timeframes. We published an analysis of 10 of our reviews showing the median time to completion was 11 workdays.

Objectives: This workshop will show participants how to use TERA and how it fits into review workflows to improve the speed and quality of conducting systematic reviews.

Description: The workshop will comprise live demonstrations of TERA, conducted by the TERA developers and expert users of the tools. The live demonstration will be interspersed with hands-on tutorials in using the tools. Interactive feedback with the presenters will be encouraged, and sufficient time for this is incorporated in the design of the workshop. The presenters' expert skills in both conducting reviews and using the tools are a key component of the workshop. All the tools in the workshop are free and available via the Evidence Review Accelerator website: https://tera-tools.com/

Activities/Interaction Plans: The workshop will comprise the following: 1) a brief introduction; 2) an interactive component with the presenters and participants designing and writing a review using the Methods Wizard; 3) a demonstration of creating focused searches using the Word Frequency Analyser, SearchRefiner, Polyglot Search Translator and the Deduplicator; 4) a demonstration of Generative AI-powered screening using MechaScreener; 5) a demonstration of citation searching with SpiderCite; 6) an interactive discussion and demonstration of the meta-analysis tools MetaInsight, MetaDTA and MetaWise; 7) using TERA Farmer to test whether all relevant studies have been found.

Lost in translation? Using AI tools for cross-cultural and multilingual research


AI tools are increasingly used by researchers to translate documents, draft communications, and process data across languages. But translation is only part of the challenge — and often the easier part. This session draws on real examples from ethnographic fieldwork in rural Yunnan, China, to explore where AI tools genuinely help, where they fall short, and what the researcher must bring that AI cannot. We will look at three concrete cases. Participants will leave with practical strategies for integrating AI into multilingual research workflows while maintaining cultural rigour and research ethics.
Stop presenting: Start storytelling


Most scientific presentations fail not because the research is weak—but because the story is buried.

This workshop teaches researchers how to transform results and complex science into compelling, logically structured presentations without sacrificing rigour. Participants will learn how to apply narrative principles to scientific talks—clarifying their core message, structuring results for maximum impact, and designing slides that support thinking rather than distract from it.

Through real examples and practical frameworks, attendees will leave with concrete tools to:

  • Identify and sharpen the central message of their work
  • Recognise the benefits of 'thinking outside the box' on occasion
  • Structure presentations around a clear narrative arc
  • Design figures and slides that enhance comprehension
  • Avoid common pitfalls that undermine clear messaging
  • Communicate complex research with clarity and confidence

This is not a workshop on aesthetics. It is a workshop on thinking clearly—and making that thinking visible.

Species distribution modelling in EcoCommons: Introduction to platform and notebook workflows for research


EcoCommons Australia offers a comprehensive suite of resources for ecological modelling, including an intuitive, user-friendly platform featuring thousands of trusted datasets and a range of expert-developed workflows for species distribution and community modelling.

This workshop will begin with a brief introduction to species distribution model (SDM) theory, followed by a guided tour of the EcoCommons platform and coding notebook workflows. There will also be a focus on selecting appropriate occurrence and environmental data for research aims and questions.

We will cover:

  • Basic concepts and applications of SDMs
  • How to access and acquire biodiversity records from open data portals
  • An introduction to the EcoCommons platform and its 59,000 curated environmental datasets
  • Selecting sensible model data for research
  • Building SDMs using a selection of the 19 available algorithms on the EcoCommons platform (with detailed examples of the GLM and Maxent algorithms)
  • Exploring and interpreting model outputs
  • Using EcoCommons notebooks to convert between spatial file formats and modify spatial outputs

By the end of the workshop, attendees will understand how to:

  • Navigate the EcoCommons platform
  • Select fit-for-purpose data for your research question
  • Run robust and repeatable SDMs
  • Produce accurate and meaningful results and modify model outputs for downstream workflows and reporting
  • Access and use EcoCommons coding notebooks
Designing literature review matrices with Excel and Power BI


This interactive workshop introduces a structured and data-driven approach to conducting literature reviews using Excel-based matrix outlining and Power BI dashboards. Participants will learn how to design and build a relational literature review database that supports systematic organisation, citation extraction, and writing alignment for academic research (e.g., theses, journal articles, and systematic reviews).

The session will cover:

  • Designing a structured literature review database in Excel (articles, metadata, classification)
  • Building a citation extraction matrix aligned with writing purposes
  • Linking sources to themes, categories, and sections of a research document
  • Using Power BI to create dashboards for exploratory analysis (e.g., trends by year, topic distribution, methodological patterns)
  • Leveraging dashboards to support sensemaking and academic writing

By the end of the workshop, participants will have a reusable workflow to manage, analyse, and operationalise literature for research writing.

Software Carpentry: Plotting and Programming in Python (1.5 days)


An introduction to Python that places an emphasis on working with and visualising tabular data.

Note that this workshop runs over 1.5 days, from 9:30 am on Tuesday 23rd, to 12:30 pm on Wednesday 24th. Please only book this workshop if you plan to attend it in its entirety.

Software Carpentry: R for Reproducible Scientific Analysis (1.5 days)


An introduction to R that places an emphasis on making data analysis reproducible, using examples of data processing and visualisation.

Note that this workshop runs over 1.5 days, from 9:30 am on Tuesday 23rd, to 12:30 pm on Wednesday 24th. Please only book this workshop if you plan to attend it in its entirety.